
This text describes the most significant ways in which an influx of money can have adverse effects, and makes a few specific suggestions on how to mitigate these effects. It may interest you if you are in a position where your decisions about money change the incentive landscape in some significant way.

It is mostly not concerned with optics and PR, which already seem to get a lot of attention. While I started drafting this text before several recent popular posts on funding, and there is some overlap, it covers different observations and considerations, and ultimately aims for a different perspective.

Note in advance several things which are not the purpose here: to advise less spending, to advise more spending, to advise frugality, or to express general worries.

The purpose also isn't to perform a cost-benefit analysis, but simply to map the "indirect costs of funding", or, if you wish, to suggest some terms which may be part of an impact analysis.

The primary audience I hope can benefit from it is the fast-growing group of EAs who are deciding about substantial amounts of money in various roles, such as grant evaluators, regrantors, or leads of well-funded projects.
 

Adverse effects of funding

The incentive landscape is always there, and makes some things easier or harder to do. The default incentive landscape is not great: in many ways, all of EA is trying to respond to the fact that some problems will not be solved by default.

Adding money to the EA ecosystem has changed the landscape and will continue to change it. This is great in some ways: some unfortunate gaps can be covered, and important things made easier to do.

However, because we don't know what all the important things are, and because adding incentives in one place often changes the landscape in other places, it's unlikely that the resulting incentive landscape "with money" makes all the important things easier. In practice, it seems likely that some important things actually get harder, and some not-very-good things get easier.

While overall it's great that we have more money to spend on fixing problems, we should also track that money will sometimes change the incentive landscape in ways that are harmful overall. To track this, it seems useful to have a list of common negative effects.

These brief adverse effect descriptions are sometimes supplemented by "stylized examples". These examples attempt to make it easier to imagine how the general adverse effects manifest in practice. They do not attempt to describe specific people or existing situations, although they may resemble them.
 

Individual epistemic distortion 

Strong incentives can be dangerous to good epistemics. 
From a predictive processing point of view, which is my personal best guess for a simple model of what human brains are doing, cognition lacks a clear separation between "beliefs" and "goals". Accordingly, instrumental goals like "getting a large sum of money" do impact beliefs by default. Research on motivated reasoning supports this claim with ample empirical evidence.

  1. Stylized example: A young philosopher applies to an EA outreach program that comes with a large bounty. As part of the application process, she is required to read a persuasive essay by one of the thought leaders of the EA movement. Part of the young philosopher's mind is highly suspicious of one reasoning step in the essay, but some other part assumes that expressing criticism may be at odds with getting the bounty. The critical consideration never really comes to her conscious attention.
  2. Stylized example: An assistant professor in AI wants to have several PhDs funded. Hearing about the abundance of funding for AI safety research, he drafts a grant proposal arguing why the research topic his group would be working on anyway helps not only with AI capabilities, but also with AI alignment. In the process he convinces himself this is the case, and as a next step convinces some of his students.
  3. Stylized example: For a college group community builder, EA community building is the best-paying work opportunity for her age and demonstrable skill level. Even though she has some concerns about recent developments in the movement, she feels that voicing them too loudly could make it harder to get further grants, and that thinking about them too much is unproductive. She also feels that the examples of community building the community values most consist basically of getting large numbers of people to become highly engaged EAs, and voicing concerns and uncertainty seems at odds with this metric.
     

Collective epistemic and attention distortion 

Money interacts with collective sense-making, prioritisation and resource allocation, crowding out the effect of other incentives and considerations that would otherwise guide these processes. This can sometimes result in a worse allocation of resources than the counterfactual.

  1. Stylized example: An infrastructure organisation has access to a large pool of money. A member of the organisation decides to put a large bounty on the production of research in an AI-safety agenda which they like, according to their research taste. The attention of many researchers gets pointed towards the bountied agenda.
    X-risk is actually increased, because the original allocation of the researchers' attention, based on their own reasoning, recommendations of senior researchers, and prestige gradients, was better than being guided by the research taste of members of the well-funded infrastructure organisation.
  2. Stylized example: Salaries in some of the new well-funded projects get ten times higher than those for directly x-risk-relevant research work in classical academia, creating incentives to move. While this directly improves the new projects, it has hidden costs: long-termist ideas get worse representation and less attention in academia, which overall still has much more brainpower than the EA research ecosystem.
     

Individual motivational distortion 

While the first-order factor here clearly is "people should get paid for their time/work", monetary rewards can have complex effects on people's motivational systems. According to motivation crowding theory, under some circumstances people can become less motivated when they start getting paid for work they were previously doing voluntarily.

  • Stylized example: An effective altruist started working on EA after a whole life history of volunteering and work on altruistic activities. They consider altruism a big part of their life, and they were always getting both utils and fuzzies from their work. Suddenly, their income from an EA job increased rapidly, almost to the market rate. They are now emotionally confused about whether the monetary compensation means they should internally count fewer utils and fuzzies, because for the now-large monetary compensation, others would be happy to do the same job just for money. As the work is almost fully compensated at market rate in money, rewarding oneself with utils and fuzzies now seems redundant and fake. They are considering whether they should donate a large part of their EA income back to EA charities, but this feels weird.
     

Adverse effects on culture 

Money can generally corrode the trust and social fabric among "EAs", e.g. by making it harder or less natural to establish interpersonal relationships that are not mediated through the lens of "who receives how much money", "will collaborating or associating with this person increase my chances of getting funding", or "will becoming friends with this person create conflicts of interest".

Existing EA culture often originated at a time when being an EA was associated with costly signalling, such as donating money or working on EA projects for small or no monetary compensation. This increased trust in the ecosystem, and made it easy to form high-trust relationships with people. It also had some side effects: because trust in relationships was relatively abundant and cheap in this culture, many systems got based on cheap, high-trust processes and on trust networks. It seems likely that a large influx of funds, and paying many people their market rate, is not compatible with this culture.

When costly signalling via money becomes inaccessible, it can be replaced by ways of signalling which are more costly in other domains.

  • Stylized example: Years ago, many EA events had more of the vibe of nerds and builders gathering to think about prioritisation, scope, and the future. Now, at least some EA events have more of the vibe of a career fair and of aggressive networking.

     

Illusion of efficient markets 

Funding can support the illusion that we are surrounded by an efficient market which solves all the subproblems (these can now be bought as a service) and effectively allocates resources, in contrast to the mindset of constantly noticing inadequacies and gaps.

  1. Stylized example: An EA org competent at building ops capacity, which would pay off in the long run, has its talent poached by a newly "well-funded EA project" willing to pay 10x previous rates for similar work. This move does not increase the ops capacity of the whole EA ecosystem at all, or improve the allocation of EA resources. However, from the side of the "well-funded EA project", it may look like the problem with operations was effectively solved by "the market" and "use of money".
    1. Note that many "internal EA markets" are currently extremely inefficient: compensation for the same work can differ between organisations and projects by more than an order of magnitude. At the same time, the compensation does not closely track past or future impact, but often seems random, depending on who got which grant at what point in time, how much different orgs were willing to engage in competitive bidding for staff, how speculative ideas with bad feedback loops were evaluated in shallow evaluation processes, etc.
       

Feeling of entitlement

Funding can create a feeling of entitlement among those who receive the money, foster a sense of dependency on grantmakers, and lead to division and resentment among those who don't get the money.

  1. Stylized example: A young grant recipient boasts about getting a 6-figure grant. This buys them prestige and social status among their peers. While in fact the grant rewards agency and having a plausible idea, it is taken as a strong signal of great thinking and a high level of competence.
     

Less curiosity, less creativity, less efficiency

Money can sometimes make people less curious and less likely to come up with creative solutions. It can also make people lazy and less likely to look for cost-effective solutions.

  1. Stylized example: A person who, counterfactually, would be solving maths puzzles, spends their time on repetitive 1:1 proselytising.
  2. Stylized example: The organisers of a summer program are trying to get the attention of interested people, many of whom are computer geeks, and increase their willingness to fill out the application form. Whereas without a lot of money, the program organisers might be devising interesting programming puzzles that suitable interested people would enjoy solving on their own, with a lot of money, this form of creativity is crowded out by just setting a financial bounty. 
     

Too-big-to-fail projects

Projects with large budgets can make it harder to “fail gracefully”. Being able to make mistakes appears to be an important thing in people’s growth trajectory.

Stylized example: A young EA decides to start a national EA chapter in his country, which happens to be a developing country with a culture different from the US or UK. He gets a sizable grant, based on a hits-based-giving philosophy. After a few months of work on the project, he isn't getting much traction. His compatriots mostly feel their country has enough of its own problems and poverty, and that more esoteric problems with technological x-risk should be dealt with by the rich countries working on such technologies. The young EA realises that localising effective altruism to a non-Western culture in a developing country needs a huge amount of work, more experience, and ideally getting some local thought leaders on board. Personally, he would be tempted to postpone this work. However, returning most of the large grant would seem like a failure. Also, a smaller part of the money has already been spent with no tangible result. Overall, the young EA is quite worried about failing at the large and important task supported by the grant, and he worries this will damage his reputation with the funders.


Optics and PR concerns

These seem to be sufficiently well covered in other posts, and are also often the main thing people worry about. I think this is unfortunate, because the other adverse effects described here are often bigger. Also, there is a risk of orienting our actions too much around the impressions of random bystanders.
 

Some adverse effects are externalities 

What seems common in many of these examples is that the negative effects are located somewhere other than the granting and spending, and affect parties other than the funder and the recipient of the funds. In other words, we can understand them using the concept of negative externalities. (An externality is an effect of a particular economic activity on a third party that is not directly involved in the activity. The impact can be either positive or negative and can arise from either the production or consumption of a good or service.)

This points to a possible partial solution: funders can spend more effort on evaluating the externalities described above in addition to assessing the direct impact of the money, and so "price them in".
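As a toy sketch of what "pricing in" could mean in practice: subtract rough estimates of externality costs from the estimated direct impact before comparing grants. All the category names and numbers below are invented for illustration, not proposed values:

```python
# A minimal sketch of "pricing in" externalities when scoring a grant.
# All categories and numbers are made up for illustration; real estimates
# would carry large uncertainties and deserve ranges, not point values.

def adjusted_impact(direct_impact, externality_costs):
    """Direct impact minus the summed cost of estimated negative externalities."""
    return direct_impact - sum(externality_costs.values())

grant = {
    "direct_impact": 100.0,  # in some common unit of expected impact
    "externality_costs": {
        "epistemic_distortion": 15.0,  # motivated reasoning in applicants
        "attention_crowding": 10.0,    # attention pulled off better agendas
        "trust_erosion": 5.0,          # relationships mediated by money
    },
}

print(adjusted_impact(grant["direct_impact"], grant["externality_costs"]))  # 70.0
```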
 

Countering measurability bias 

Another commonality among many of the negative effects seems to be that they affect harder-to-measure values, such as trust or collective epistemics, while the "direct impact" of some projects is measured in seemingly more quantifiable terms, such as the number of highly engaged EAs, or the direct advancement of some project.

This also points to a possible partial solution: funders can ask the parts of the EA ecosystem focused on the harder-to-measure values for their opinion, or for evaluations of impact on these harder-to-measure qualities. (For example: CEA's Community Health Team likely has better models of some of the culture and epistemics downside risks than either the funders or the founders of, e.g., outreach projects.)
 

Bottom line

The purpose of the preceding is to point out that while EA funding probably improves the incentive landscape on average, in places it can make it worse. Clearly there is no one-size-fits-all solution, but I suspect that often the first step to mitigating adverse effects is noticing them, and the above taxonomy can help. Also, funding decisions could be improved by trying to price in factors beyond the direct cost of "how much we spent", and to map some of these costs.

My personal estimate is that the monetary value of assets such as levels of trust or epistemic norms in the EA community is quite high. You can get your own estimate: imagine you could press a button which would, for example, add X billion dollars to EA-aligned funding and remove a random 10% of the high-trust interpersonal links in the community. The smallest X at which you would press the button gives a rough lower bound on how much you value those links, as sketched below.
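A back-of-envelope version of this thought experiment, with a hypothetical threshold and a crude linearity assumption that is only illustrative:

```python
# Back-of-envelope version of the "button" thought experiment.
# Suppose the smallest X (in billions of dollars) at which you would press
# the button is your indifference point. Then you implicitly value the
# destroyed 10% of high-trust links at roughly that amount.

x_min_billions = 2.0  # hypothetical threshold: you'd press only for X >= $2B

value_of_10_percent = x_min_billions         # implied value of the lost links
value_of_network = 10 * value_of_10_percent  # crude linear extrapolation;
                                             # real trust networks likely
                                             # scale non-linearly

print(f"Implied value of the trust network: >= ${value_of_network:.0f}B")
```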

Also, in my personal experience, most funders are actually already trying to get this right, price in externalities, and so on. This usually isn't easy: I mainly hope this list of categories to check may make it a bit easier. 

This is partially based on ideas and suggestions from Rose Hadhar, Nora Ammann, Miraya Poddar-Agrawal, Owen Cotton-Barratt and others, including OpenAI's GPT-3, but they are in no way responsible for the final text.

Comments (3)

Commenting to say I strongly agree that epistemic and attention distortions are big problems. It already seems like the Future Fund has swayed the ideological center of this movement.

I would like to see an analysis of how the Future Fund changed the ideological mass distribution of this community. I think you could argue that most of the shift it caused came simply from changed incentives, not from new information.

E.g., as someone who has thought for a while that EA underfunds political-type stuff, it's been concerning to see people get more interested in (EA) politics and spend so much attention on whether politics is worth it and/or how best to do politics, just because someone in the community donated 12M dollars (and because they have high status, which is because they are rich...). It's not like SBF is a poli sci expert or wrote a groundbreaking cost-benefit analysis to convince us (correct me if I'm wrong). He just went on the 80k podcast, said he thinks politics is a good bet, and then dumped the cash trucks. I understand that even if you disagree with the Flynn campaign you're going to want to comment on how you disagree, but the implication here is that if an EA billionaire gives 12M dollars to have people dig holes in the ground (ok, it would have to be something a bit more convoluted and/or justifiable), it's going to at least cause a bunch of impactful people to spend time thinking about the value prop.

If EA people think a project is valuable, we would hope their focus would not be heavily conditional on the current funding streams.

Wanted to make a very small comment on a very small part of this post.

An assistant professor in AI wants to have several PhDs funded. Hearing about the abundance of funding for AI safety research, he drafts a grant proposal arguing why the research topic his group would be working on anyway helps not only with AI capabilities, but also with AI alignment. In the process he convinces himself this is the case, and as a next step convinces some of his students.

Yes, this certainly might be an issue! This particular issue can be mitigated by having funders do lots of grant follow-ups to make sure that differential progress in safety, rather than capabilities, is achieved.

X-Risk Analysis by Dan Hendrycks and Mantas Mazeika provides a good roadmap for doing this. There are also some details in this post (edit, since my connection may not have been obvious: I work with Dan and I'm an author of the second post).

Often, if there is no money, we are forced to do things the hard way, which has the effect of slow and strong building... having money opens faster channels, which sometimes are too easy and don't build much. But frequently money opens doors you didn't have access to. This is a classic problem. I find leadership character is the biggest factor... knowing when to wait and build slower... knowing when to go ahead and pay to move faster or at higher quality.

For example, you can get people to do things because they are passionate to volunteer... now you have a solid compatriot who does it from the heart... maybe the quality is better... for a while... later, because they're not getting paid, the quality may go down as their life situation forces them to prioritize earning income. A movement like this, with many young people, will have a natural evolution... when everyone is young and in school, all will work with passion for free... later many get married... have a child... work/life balance gets harder... now you probably should pay them... but now you also have more funding, so it kind of works out.

The character issue is biggest right at the juncture between being a poor student and running a well-funded org... you can finally go out to eat and maybe buy a car... with many decisions, just because you can doesn't mean you should... but sometimes you should. Since this is EA, I'm sure character won't be a major factor. Coaching recent windfall recipients is very helpful. It could be a grant funder technique... give a grant and include a coach for free.
